Conversation

@guicho271828 (Contributor)

No description provided.

mergify bot commented Jan 6, 2026

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Enforce conventional commit

Wonderful, this rule succeeded.

Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/

  • title ~= ^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert|release)(?:\(.+\))?:
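
For reference, a minimal sketch of the check this rule performs, using Python's re module (an illustration only, not Mergify's own implementation; the pattern is copied from the rule above):

```python
import re

# Pattern copied from the merge-protection rule above.
CONVENTIONAL_TITLE = re.compile(
    r"^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert|release)"
    r"(?:\(.+\))?:"
)

def title_ok(title: str) -> bool:
    """Return True if a PR title satisfies the conventional-commit rule."""
    return CONVENTIONAL_TITLE.match(title) is not None

print(title_ok("refactor: llguidance"))  # True  -- the renamed PR title passes
print(title_ok("Llguidance"))            # False -- the original title fails
```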

@guicho271828 changed the title from "Llguidance" to "refactor: llguidance" on Jan 6, 2026
@jakelorocco (Contributor) left a comment

Looks like there are some mypy errors as well.

@guicho271828 (Contributor, Author)

> Looks like there are some mypy errors as well.

I don't know why there is a mypy error. Mypy passes locally for me.

@guicho271828 force-pushed the llguidance branch 2 times, most recently from 3ebb8bd to 8dce1f2 on January 7, 2026 at 19:29
@nrfulton (Contributor) commented Jan 7, 2026

> Looks like there are some mypy errors as well.
>
> I don't know why there is a mypy error. Mypy passes locally for me.

We've seen this happen recently and it came down to mypy versions. Could be the cause. I re-ran the checks on the latest branch. If those still fail and you still pass with mypy, try checking your mypy version.
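
A quick way to confirm which mypy the local environment actually resolves (a sketch; the version CI uses is whatever the lockfile pins):

```python
# importlib.metadata ships with the standard library (Python 3.8+).
from importlib.metadata import version

print("local mypy:", version("mypy"))
# If this differs from the version CI installs, pinning mypy (or refreshing
# the lockfile) usually makes local and CI results agree.
```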

@guicho271828 (Contributor, Author)

With an updated lockfile, mypy passes. The test job gets terminated in CI, but the tests pass locally. If you approve, I will merge this.

@jakelorocco (Contributor) left a comment

@guicho271828, can you remind me if we ever got the local vllm backend running on mac (or if you remember a discussion about that)?

I tried installing this version of the mellea package on my mac and was getting versioning errors.

Details

(mellea) ➜  mellea git:(pr/guicho271828/288) uv pip install -e '.[all]' --all-extras --group dev -r pyproject.toml
Using Python 3.12.0 environment at: /opt/homebrew/Caskroom/miniforge/base/envs/mellea
  × No solution found when resolving dependencies:
  ╰─▶ Because only the following versions of nvidia-cudnn-frontend are available:
          nvidia-cudnn-frontend<=1.13.0
          nvidia-cudnn-frontend==1.14.0
          nvidia-cudnn-frontend==1.14.1
          nvidia-cudnn-frontend==1.15.0
          nvidia-cudnn-frontend==1.16.0
          nvidia-cudnn-frontend==1.17.0
      and nvidia-cudnn-frontend>=1.13.0 has no wheels with a matching platform tag (e.g., `macosx_15_0_arm64`), we can conclude that
      nvidia-cudnn-frontend>=1.13.0 cannot be used.
      And because flashinfer-python==0.5.3 depends on nvidia-cudnn-frontend>=1.13.0 and vllm==0.13.0 depends on flashinfer-python==0.5.3, we
      can conclude that vllm==0.13.0 cannot be used.
      And because only vllm<=0.13.0 is available and mellea depends on vllm>=0.13.0, we can conclude that your requirements are
      unsatisfiable.

It looks like the newest version I can install on a Mac is 0.11.0.

The vllm engine can't seem to get enough resources to run locally on my Mac anyway, so maybe we can just add a note somewhere that "mellea[vllm]" doesn't work on Macs?

@guicho271828 (Contributor, Author)

> @guicho271828, can you remind me if we ever got the local vllm backend running on mac (or if you remember a discussion about that)?
>
> I tried installing this version of the mellea package on my mac and was getting versioning errors.
>
> The vllm engine can't seem to get enough resources to run locally on my Mac anyway, so maybe we can just add a note somewhere that "mellea[vllm]" doesn't work on Macs?

To run locally, vllm assumes one of the following platforms:

NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, and TPUs

So yes, mellea[vllm] does not work on a Mac.
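
As an illustration only (the helper below is hypothetical, not mellea's actual backend-selection code), a runtime guard like this makes the limitation explicit instead of failing deep inside vllm:

```python
import importlib.util
import sys

def local_vllm_available() -> bool:
    """Hypothetical check: vllm is installed and the platform is not macOS."""
    if sys.platform == "darwin":   # vllm has no supported local target on macOS
        return False
    return importlib.util.find_spec("vllm") is not None

if not local_vllm_available():
    print("Local vllm backend unavailable on this platform; pick another backend.")
```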

@guicho271828 (Contributor, Author)

> It looks like the newest version I can install on a Mac is 0.11.0.
>
> The vllm engine can't seem to get enough resources to run locally on my Mac anyway, so maybe we can just add a note somewhere that "mellea[vllm]" doesn't work on Macs?

29588fb disables installing vllm on darwin.
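
For context, a change like that is typically expressed with a PEP 508 environment marker on the dependency (e.g. `vllm>=0.13.0; sys_platform != 'darwin'`); the actual diff in 29588fb is not reproduced here. A minimal sketch of how such a marker evaluates, assuming the `packaging` library is available:

```python
from packaging.markers import Marker

# The marker is evaluated against the current interpreter's environment.
marker = Marker("sys_platform != 'darwin'")
print(marker.evaluate())  # False on macOS, True elsewhere
```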
